We have quantified the speed advantage of Binary Search by showing the vastly reduced number of "steps" it requires: only $O(\log_2 n)$. To move from mathematical complexity to real-world performance, however, we must understand what a "step" represents inside the machine: the execution of a low-level instruction by the hardware.
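The logarithmic step count can be verified empirically. The sketch below (our own illustration, not code from the text) instruments a standard binary search to count how many array probes it performs, which stays within $\lfloor \log_2 n \rfloor + 1$:

```python
# Hypothetical sketch: binary search instrumented to count its "steps"
# (array probes), confirming the O(log2 n) bound from the text.
from math import floor, log2

def binary_search_steps(A, t):
    """Return (index or None, number of probes performed)."""
    lo, hi = 0, len(A) - 1
    steps = 0
    while lo <= hi:
        steps += 1                 # one "step" per probe of the array
        mid = (lo + hi) // 2
        if A[mid] == t:
            return mid, steps
        elif A[mid] < t:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, steps

A = [4, 8, 15, 16, 23, 42]
idx, steps = binary_search_steps(A, 23)
print(idx, steps)   # → 4 2  (well within floor(log2(6)) + 1 = 3 probes)
```

For $n = 6$ the bound guarantees at most 3 probes; here the target is found in 2.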

  • Every algorithm, regardless of complexity, must be executed by the Central Processing Unit (CPU). We model the time $f(n)$ required for an algorithm as the total count of these fundamental instructions (or operations) performed.
  • The CPU relies on a core architecture to process data like our search array $A$:
    • Main Memory (RAM): Stores the input data, such as array $A = [4, 8, 15, 16, 23, 42]$, and the algorithm's instructions.
    • Control Unit (CU): Responsible for fetching instructions from memory and directing all other components. It manages the program flow.
    • Arithmetic Logic Unit (ALU): Performs all computations (like pointer arithmetic for `mid`) and the critical comparison steps (e.g., checking if $A[mid] < t$).
    • Registers: Extremely fast, small storage locations within the CPU used to temporarily hold data and memory addresses needed for the current instruction being executed.

Key Takeaway

Analyzing time complexity is essentially counting how many times the ALU is called upon to perform an operation, and how many times data must be moved between Memory and Registers.
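This counting view can be made concrete with a small instrumentation. The breakdown into categories below is a simplified model of our own devising (real hardware executes many more micro-operations), but it shows both tallies growing logarithmically with $n$:

```python
def instrumented_search(A, t):
    """Binary search tallying a simplified model of hardware activity."""
    alu_ops = 0     # comparisons and arithmetic performed by the ALU
    mem_loads = 0   # elements of A moved from memory into a register
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        alu_ops += 1              # loop-bound comparison lo <= hi
        mid = (lo + hi) // 2
        alu_ops += 2              # addition and division for mid
        x = A[mid]
        mem_loads += 1            # A[mid] fetched from main memory
        if x == t:
            alu_ops += 1          # equality comparison that succeeded
            return mid, alu_ops, mem_loads
        alu_ops += 1              # equality comparison that just failed
        if x < t:
            lo = mid + 1
        else:
            hi = mid - 1
        alu_ops += 2              # ordering test plus pointer arithmetic
    alu_ops += 1                  # final loop-bound comparison that exits
    return -1, alu_ops, mem_loads

# f(n) is a constant number of operations per iteration times O(log2 n)
# iterations, so both tallies barely grow as n increases 64-fold:
for n in (8, 64, 512):
    _, ops, loads = instrumented_search(list(range(n)), n - 1)
    print(n, ops, loads)
```

Searching for the last element of arrays of size 8, 64, and 512 loads at most 4, 7, and 10 elements respectively, even though $n$ grows by a factor of 64.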